
    GAP Safe screening rules for sparse multi-task and multi-class models

    High-dimensional regression benefits from sparsity-promoting regularizations. Screening rules leverage the known sparsity of the solution by ignoring some variables in the optimization, hence speeding up solvers. When the procedure is proven not to discard features wrongly, the rules are said to be \emph{safe}. In this paper we derive new safe rules for generalized linear models regularized with $\ell_1$ and $\ell_1/\ell_2$ norms. The rules are based on duality gap computations and spherical safe regions whose diameters converge to zero. This allows one to safely discard more variables, in particular for low regularization parameters. The GAP Safe rule can cope with any iterative solver, and we illustrate its performance on coordinate descent for the multi-task Lasso and for binary and multinomial logistic regression, demonstrating significant speed-ups on all tested datasets with respect to previous safe rules.
    Comment: in Proceedings of the 29th Conference on Neural Information Processing Systems (NIPS), 201
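    To make the rule concrete, here is a minimal numpy sketch of the GAP Safe test for the plain Lasso; the function name and structure are illustrative, not the authors' implementation, and the multi-task and multinomial variants described in the paper follow the same pattern with block norms in place of absolute values.

```python
import numpy as np

def gap_safe_screen(X, y, beta, lam):
    """One GAP Safe screening pass for the Lasso
    min_beta 0.5 * ||y - X beta||^2 + lam * ||beta||_1.
    Returns a boolean mask of features that are provably inactive
    at the optimum and can be safely discarded."""
    r = y - X @ beta                              # current residual
    # Dual-feasible point obtained by rescaling the residual.
    theta = r / max(lam, np.abs(X.T @ r).max())
    # Duality gap between the primal and dual objectives.
    primal = 0.5 * r @ r + lam * np.abs(beta).sum()
    dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
    gap = max(primal - dual, 0.0)
    # GAP Safe sphere: center theta, radius sqrt(2 * gap) / lam,
    # whose diameter shrinks to zero as the solver converges.
    radius = np.sqrt(2.0 * gap) / lam
    # Feature j is safely inactive if |x_j^T theta| + radius * ||x_j|| < 1.
    return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0
```

    In practice such a test would be run every few passes of an iterative solver, restricting subsequent coordinate descent updates to the surviving columns.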

    Damage location method for thin composite structures - application to an aircraft door

    Piezoelectric sensors are widely used in Structural Health Monitoring (SHM) techniques due to their high-frequency capability. In particular, electromechanical impedance (EMI) techniques offer simple and low-cost solutions for detecting damage in composite structures. For example, damage indicators computed from EMI deviations between the pristine structure and the damaged structure can be compared to a threshold in order to detect damage. When it comes to damage localization, however, analysis of the electromechanical impedance alone fails to provide enough information. We propose a method based both on EMI damage indicators and on the acoustic attenuation level to locate damage. One of the main advantages of our method, a so-called data-driven method, is that only experimental data are used as inputs to our algorithms; it does not rely on any model.
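    As an illustration of how such an indicator can be built, the sketch below uses the root-mean-square deviation (RMSD) between impedance signatures, a common choice in the EMI literature; the abstract does not specify the exact formula used, and the threshold value here is purely illustrative.

```python
import numpy as np

def rmsd_damage_index(z_baseline, z_current):
    """Root-mean-square deviation between the real parts of two EMI
    signatures (pristine vs. current state), sampled at the same
    frequencies. Larger values indicate larger impedance deviations."""
    r0, r1 = np.real(z_baseline), np.real(z_current)
    return np.sqrt(np.sum((r1 - r0) ** 2) / np.sum(r0 ** 2))

# Flag damage when the indicator exceeds a threshold calibrated on
# repeated measurements of the pristine structure (0.05 is illustrative).
def is_damaged(z_baseline, z_current, threshold=0.05):
    return rmsd_damage_index(z_baseline, z_current) > threshold
```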

    Job Turnover, Unemployment and Labor Market Institutions

    This paper studies the role of labor market institutions in unemployment and in the cyclical properties of job flows. We construct an intertemporal general equilibrium model with search unemployment and endogenous job turnover, and examine the consequences of introducing an unemployment benefit, a firing cost, and a downward wage rigidity. The simulations suggest that downward wage rigidities, rather than unemployment benefits or firing costs, may well play a dominant role in explaining both the high unemployment rate and the job flow dynamics of such an economy.
    Keywords: Unemployment, Job flow dynamics, Institutions

    Efficient Smoothed Concomitant Lasso Estimation for High Dimensional Regression

    In high-dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance alike. It is customary to use an $\ell_1$ penalty to enforce sparsity in such scenarios. Sparsity-enforcing methods, the Lasso being a canonical example, are popular candidates to address high dimensionality. For efficiency, they rely on tuning a parameter that trades data fitting against sparsity. For the Lasso theory to hold, this tuning parameter should be proportional to the noise level, yet the latter is often unknown in practice. A possible remedy is to jointly optimize over the regression parameter as well as over the noise level. This has been considered under several names in the literature, for instance Scaled Lasso, Square-root Lasso, and Concomitant Lasso estimation, and could be of interest for confidence sets or uncertainty quantification. In this work, after illustrating numerical difficulties of the Concomitant Lasso formulation, we propose a modification we coined the Smoothed Concomitant Lasso, aimed at increasing numerical stability. We propose an efficient and accurate solver whose computational cost is no more expensive than that of the Lasso. We leverage standard ingredients behind the success of fast Lasso solvers: a coordinate descent algorithm, combined with safe screening rules that eliminate irrelevant features early.
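    A minimal sketch of the joint optimization follows, assuming the usual smoothed concomitant objective $\min_{\beta, \sigma \ge \sigma_0} \|y - X\beta\|_2^2 / (2n\sigma) + \sigma/2 + \lambda \|\beta\|_1$; it uses plain coordinate descent with a closed-form $\sigma$ update and omits the safe screening rules that make the authors' solver fast, so it should be read as an illustration rather than the paper's algorithm.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def smoothed_concomitant_lasso(X, y, lam, sigma_0, n_iter=100):
    """Alternating minimization of
    ||y - X beta||^2 / (2 n sigma) + sigma / 2 + lam * ||beta||_1
    over beta and sigma >= sigma_0 (the smoothing lower bound)."""
    n, p = X.shape
    beta = np.zeros(p)
    norms2 = (X ** 2).sum(axis=0)       # column squared norms
    r = y.copy()                        # residual y - X beta
    sigma = max(sigma_0, np.linalg.norm(r) / np.sqrt(n))
    for _ in range(n_iter):
        # Closed-form sigma step, clipped below at sigma_0 for stability.
        sigma = max(sigma_0, np.linalg.norm(r) / np.sqrt(n))
        # Coordinate descent step on beta with sigma held fixed.
        for j in range(p):
            old = beta[j]
            z = X[:, j] @ r + norms2[j] * old   # x_j^T (partial residual)
            beta[j] = soft_threshold(z, n * sigma * lam) / norms2[j]
            if beta[j] != old:
                r -= X[:, j] * (beta[j] - old)
    return beta, sigma
```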